
    Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to use an $\ell_1$ penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled-Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for confidence sets or uncertainty quantification. In this work, after illustrating numerical difficulties of the Concomitant Lasso formulation, we propose a modification, coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, achieving speed by eliminating irrelevant features early.
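
    A minimal sketch of the joint optimization may help fix ideas. The snippet below is not the authors' solver (the safe screening rules are omitted, and names such as `smoothed_concomitant_lasso` are invented for illustration); it only alternates plain coordinate descent on the regression vector with the closed-form noise-level update for the objective min over (beta, sigma >= sigma_0) of ||y - X beta||^2 / (2 n sigma) + sigma/2 + lambda * ||beta||_1.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def smoothed_concomitant_lasso(X, y, lam, sigma_0=1e-2, n_iter=100):
    """Alternate coordinate descent on beta with the closed-form sigma update,
    for min ||y - X b||^2 / (2 n sigma) + sigma / 2 + lam * ||b||_1, sigma >= sigma_0."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.copy()                          # residual y - X beta
    col_sq = (X ** 2).sum(axis=0)             # squared column norms
    sigma = max(np.linalg.norm(resid) / np.sqrt(n), sigma_0)
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            old = beta[j]
            # correlation of feature j with the partial residual (beta_j removed)
            rho = X[:, j] @ resid + col_sq[j] * old
            beta[j] = soft_threshold(rho, n * sigma * lam) / col_sq[j]
            if beta[j] != old:
                resid -= X[:, j] * (beta[j] - old)
        # closed-form noise-level update, clipped below by sigma_0
        sigma = max(np.linalg.norm(resid) / np.sqrt(n), sigma_0)
    return beta, sigma
```

    The clipping at sigma_0 is the smoothing referred to in the title: it prevents the noise estimate from collapsing to zero, which is the source of the numerical instability mentioned above.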

    Building up time-consistency for risk measures and dynamic optimization

    In stochastic optimal control, one deals with sequential decision-making under uncertainty; with dynamic risk measures, one assesses stochastic processes (costs) as time goes on and information accumulates. Under the same vocable of time-consistency (or dynamic-consistency), both theories coin two different notions: the latter is consistency between successive evaluations of a stochastic process by a dynamic risk measure (a form of monotonicity); the former is consistency between solutions to intertemporal stochastic optimization problems. Interestingly, both notions meet in their use of dynamic programming, or nested, equations. We provide a theoretical framework that offers i) basic ingredients to jointly define dynamic risk measures and corresponding intertemporal stochastic optimization problems and ii) common sets of assumptions that lead to time-consistency for both. We highlight the role of time and risk preferences, materialized in one-step aggregators, in time-consistency. Depending on how one moves from one-step time and risk preferences to intertemporal time and risk preferences, and depending on their compatibility (commutation), one will or will not observe time-consistency. We also shed light on the relevance of information structure by giving an explicit role to a state-control dynamical system, with a state that parameterizes risk measures and is the input to optimal policies.
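
    To make "dynamic programming, or nested, equations" concrete, here is a hedged, generic illustration (the symbols V_t, rho_t, c_t, f_t, U_t below are placeholder notation, not the paper's): a one-step aggregator rho_t composes the current cost with the next value function along the state dynamics.

```latex
% Generic nested (dynamic programming) equation -- illustrative notation only:
%   V_t    : value function at time t, parameterized by the state x
%   rho_t  : one-step (risk) aggregator over the next-stage uncertainty W_{t+1}
%   c_t    : one-step cost,  f_t : state dynamics,  U_t(x) : admissible controls
V_t(x) \;=\; \min_{u \in U_t(x)}\;
  \rho_t\Bigl[\, c_t\bigl(x, u, W_{t+1}\bigr)
      \;+\; V_{t+1}\bigl(f_t(x, u, W_{t+1})\bigr) \Bigr],
\qquad V_T(x) \;=\; K(x).
```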

    Time-Consistency: from Optimization to Risk Measures

    Stochastic optimal control is concerned with sequential decision-making under uncertainty. The theory of dynamic risk measures gives values to stochastic processes (costs) as time goes on and information accumulates. Both theories coin, under the same vocable of time-consistency (or dynamic-consistency), two different notions: the latter is consistency between successive evaluations of a stochastic process by a dynamic risk measure as information accumulates (a form of monotonicity); the former is consistency between solutions to intertemporal stochastic optimization problems as information accumulates. Interestingly, time-consistency in stochastic optimal control and time-consistency for dynamic risk measures meet in their use of dynamic programming, or nested, equations. We provide a theoretical framework that offers i) basic ingredients to jointly define dynamic risk measures and corresponding intertemporal stochastic optimization problems and ii) common sets of assumptions that lead to time-consistency for both. Our theoretical framework highlights the role of time and risk preferences, materialized in one-step aggregators, in time-consistency. Depending on how one moves from one-step time and risk preferences to intertemporal time and risk preferences, and depending on their compatibility (commutation), one will or will not observe time-consistency. We also shed light on the relevance of information structure by giving an explicit role to a state-control dynamical system, with a state that parameterizes risk measures and is the input to optimal policies.

    Soil Moisture & Snow Properties Determination with GNSS in Alpine Environments: Challenges, Status, and Perspectives

    Moisture content in the soil and snow in the alpine environment is an important factor, not only for environmentally oriented research, but also for decision making in agriculture and hazard management. Current observation techniques quantifying soil moisture or characterizing a snow pack often require dedicated instrumentation that measures either at point scale or at very large (satellite pixel) scale. Given the heterogeneity of both snow cover and soil moisture in alpine terrain, observations of the spatial distribution of moisture and snow cover are lacking at spatial scales relevant for alpine hydrometeorology. This paper provides an overview of the challenges and status of the determination of soil moisture and snow properties in alpine environments. Current measurement techniques and newly proposed ones, based on the reception of reflected Global Navigation Satellite System signals (GNSS Reflectometry, or GNSS-R) or on laser scanning, are reviewed, and the perspectives offered by these new techniques to fill the current instrumentation gap are discussed. Some key enabling technologies, including the availability of modernized GNSS signals and GNSS array beamforming techniques, are also considered and discussed.

    Stress rotations and the long-term weakness of the Median Tectonic Line and the Rokko-Awaji Segment

    We used a field analysis of rock deformation microstructures and mesostructures to reconstruct the long-term orientation of stresses around two major active fault systems in Japan, the Median Tectonic Line and the Rokko-Awaji Segment. Our study reveals that the dextral slip of the two fault systems, active since the Plio-Quaternary, was preceded by fault-normal extension in the Miocene and sinistral wrenching in the Paleogene. The two fault systems deviated the regional stress field at the kilometer scale in their vicinity during each of the three tectonic regimes. The largest deviation, found in the Plio-Quaternary, is a more fault-normal rotation of the maximum horizontal stress to an angle of 79° with the fault strands, suggesting an extremely low shear stress on the Median Tectonic Line and the Rokko-Awaji Segment. Possible causes of this long-term stress perturbation include a nearly total release of shear stress during earthquakes, a low static friction coefficient, or low elastic properties of the fault zones compared with the country rock. Independently of the preferred interpretation, the nearly fault-normal orientation of the direction of maximum compression suggests that the mechanical properties of the fault zones are inadequate for the buildup of a pore fluid pressure sufficiently elevated to activate slip. The long-term weakness of the Median Tectonic Line and the Rokko-Awaji Segment may reside in low-friction/low-elasticity materials or dynamic weakening rather than in pre-earthquake fluid overpressures.

    Epiconvergence of relaxed stochastic optimization problem

    In this paper, we consider the relaxation of a dynamic stochastic optimization problem in which a stochastic constraint, for example an almost-sure constraint, is replaced by a conditional expectation constraint. We show an epiconvergence result relying on the Kudo convergence of $\sigma$-algebras and on the continuity of the objective and constraint operators. We also present some classical constraints in stochastic optimization and give conditions ensuring their continuity. We conclude with a decomposition algorithm that uses such a relaxation.
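
    As a hedged sketch of the relaxation in question (the symbols J, Theta, and the sub-$\sigma$-algebra F below are illustrative placeholders, not the paper's notation), an almost-sure constraint is replaced by a conditional expectation constraint:

```latex
% Illustrative notation: X decision, J cost, Theta constraint map, F sub-sigma-algebra
\min_{X}\ \mathbb{E}\bigl[J(X)\bigr]
\ \ \text{s.t.}\ \ \Theta(X) \le 0 \quad \mathbb{P}\text{-a.s.}
\qquad \text{relaxed into} \qquad
\min_{X}\ \mathbb{E}\bigl[J(X)\bigr]
\ \ \text{s.t.}\ \ \mathbb{E}\bigl[\Theta(X)\,\big|\,\mathcal{F}\bigr] \le 0 \quad \mathbb{P}\text{-a.s.}
```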

    Generalized adaptive partition-based method for two-stage stochastic linear problems

    Adaptive Partition-based Methods (APM) are numerical methods to solve two-stage stochastic linear problems (2SLP). The core idea is to iteratively construct an adapted partition of the uncertainty space in order to aggregate scenarios while preserving the true value of the cost-to-go for the current first-stage control. Relying on the normal fan of the dual admissible set, we extend the classical and generalized APM methods by i) extending the method to almost arbitrary 2SLP, ii) giving a necessary and sufficient condition for a partition to be adapted, even for non-finite distributions, and iii) proving the convergence of the method. We give some additional insights by linking APM to the L-shaped algorithm. A sketch of the partition-refinement step is given below.
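
    The sketch below is a minimal illustration under assumed notation (recourse problem min over y of q^T y subject to W y = h_s - T x, y >= 0 for scenario s; none of the names come from the paper) of the step that refines a partition so that scenarios sharing a cell have the same second-stage dual, i.e., lie in the same cone of the normal fan of the dual admissible set.

```python
import numpy as np
from scipy.optimize import linprog

def second_stage_dual(x, q, W, T, h_s):
    """Solve the recourse LP  min q^T y  s.t.  W y = h_s - T x,  y >= 0,
    and return the dual vector attached to the equality constraints."""
    res = linprog(q, A_eq=W, b_eq=h_s - T @ x, bounds=(0, None), method="highs")
    return res.eqlin.marginals

def refine_partition(partition, duals, tol=1e-8):
    """Split every cell so that scenarios kept together have (numerically) equal duals."""
    refined = []
    for cell in partition:
        groups = []                      # sub-cells of the current cell
        for s in cell:
            for g in groups:
                if np.allclose(duals[g[0]], duals[s], atol=tol):
                    g.append(s)
                    break
            else:
                groups.append([s])
        refined.extend(groups)
    return refined
```

    In an actual APM-style iteration, one would solve the aggregated first-stage problem on the current partition, compute these scenario duals at the resulting first-stage control x, refine the partition, and stop once no cell is split.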